

Intel Collaboration With Deci Boosts AI Performance on Intel Hardware

#artificialintelligence

Scott Bair is a Senior Technical Creative Director for Intel Labs, chartered with growing awareness of Intel's leading-edge research activities in areas such as AI, neuromorphic computing, and quantum computing. Scott is responsible for driving marketing strategy, messaging, and asset creation for Intel Labs and its joint-research activities. In addition to his work at Intel, he has a passion for audio technology and is an active father of five children. Scott has over 23 years of experience in the computing industry, bringing new products and technology to market. During his 15 years at Intel, he has worked in a variety of roles spanning R&D, architecture, strategic planning, product marketing, and technology evangelism.


How to Accelerate TensorFlow on Intel Hardware

#artificialintelligence

When deploying deep learning models, inference speed is usually measured in terms of latency or throughput, depending on your application's requirements. Latency is how quickly you can get an answer, whereas throughput is how much data the model can process in a given amount of time. Both use cases benefit from accelerating the inference operations of the deep learning framework running on the target hardware. Engineers from Intel and Google have collaborated to optimize TensorFlow* running on Intel hardware. This work is part of the Intel oneAPI Deep Neural Network Library (oneDNN) and is available as part of standard TensorFlow.
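
As a rough illustration of both metrics, the following minimal Python sketch toggles the oneDNN code path in stock TensorFlow via the TF_ENABLE_ONEDNN_OPTS environment variable and times a batch of inference calls. The model choice, batch size, and run count are placeholder assumptions, not recommendations.

# Minimal sketch: enable the oneDNN-backed kernels and time inference.
import os

# Must be set before TensorFlow is imported; on recent TensorFlow releases
# the oneDNN optimizations are on by default for supported CPUs, and this
# flag toggles them explicitly.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # random weights, timing only
batch = np.random.rand(8, 224, 224, 3).astype("float32")

model(batch, training=False)  # warm-up so graph tracing is not counted

runs = 20
start = time.perf_counter()
for _ in range(runs):
    model(batch, training=False)
elapsed = time.perf_counter() - start

print(f"latency per batch: {elapsed / runs * 1000:.1f} ms")
print(f"throughput: {runs * batch.shape[0] / elapsed:.1f} images/s")

Comparing the printed numbers with the flag set to "0" versus "1" gives a quick sense of what the oneDNN kernels contribute on a given Intel CPU.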


Deploy Deep Learning Object Detection on Intel Hardware

#artificialintelligence

Visual computing has used OpenCV algorithms to detect objects for decades. Deep learning inference takes computer vision to entirely new levels of sophistication, with robustness to poor lighting, off-angle shots, and subtle flaws. What exactly is deep learning object detection? Deep learning object detection combines two computer vision tasks: localization and classification. In localization, the model identifies objects in an image and draws a bounding box around each one; in classification, it assigns a label (and typically a confidence score) to each of those boxes.
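
A hedged sketch of that two-part output, using OpenCV's DNN module with a hypothetical SSD-style model, might look like the following; the file names and the 0.5 confidence threshold are assumptions for illustration.

import cv2
import numpy as np

# Hypothetical file names: any SSD-style detector exported for OpenCV's
# DNN module would follow the same flow.
net = cv2.dnn.readNetFromTensorflow("ssd_frozen_graph.pb", "ssd_config.pbtxt")

image = cv2.imread("street.jpg")
h, w = image.shape[:2]

# Preprocess: resize the image into the blob layout the network expects.
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True)
net.setInput(blob)

# SSD-style detectors return a [1, 1, N, 7] tensor per image:
# [image_id, class_id, confidence, x_min, y_min, x_max, y_max]
detections = net.forward()

for det in detections[0, 0]:
    confidence = float(det[2])
    if confidence < 0.5:
        continue
    class_id = int(det[1])
    # Localization: the bounding box, scaled back to pixel coordinates.
    x1, y1, x2, y2 = [int(v) for v in det[3:7] * np.array([w, h, w, h])]
    # Classification: the predicted class and its confidence.
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(image, f"class {class_id}: {confidence:.2f}", (x1, y1 - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite("street_detected.jpg", image)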


Auriga Attends Intel Experience Day 2019

#artificialintelligence

Intel Experience Day 2019, organized by Intel, one of the world's leading hardware and technology corporations, took place in Moscow at the end of October. Intel and partner companies presented the latest Intel hardware and software products advancing IoT, AI, computer vision, machine learning, object recognition, and more. Speakers including Al Diaz, an Intel Vice President; Natalya Galyan, Intel's Regional Director for Russia; and Marina Alekseeva, head of Intel R&D in Russia, shared their ideas and insights on trending industry innovations such as cloud computing, Big Data, and analytics. Intel Experience Day 2019 attracted many IT market players who use Intel solutions in their daily work, and Auriga experts were among them. Several years ago, Auriga became a pioneer user of the Intel Multi-OS Engine tool, using it to develop an innovative iPad application for patient monitoring.


Intel AI Summit: New 'Keem Bay' Edge VPU, AI Product Roadmap

#artificialintelligence

At its AI Summit today in San Francisco, Intel touted a raft of AI training and inference hardware for deployments ranging from cloud to edge, designed to support organizations at various points of their AI journeys. The company revealed its Movidius Myriad Vision Processing Unit (VPU), codenamed "Keem Bay," for edge media, computer vision, and inference applications. The company said the VPU, available in the first half of 2020, incorporates "highly efficient architectural advances" and will deliver more than 10 times the inference performance of current Movidius VPUs and up to six times the power efficiency of competitor processors. Intel claimed that "early performance testing indicates that Keem Bay will offer more than 4x the inference throughput of Nvidia's similar-range TX2 SoC at one third less power, and nearly equivalent throughput of Nvidia's next higher class SoC, Nvidia Xavier, at one fifth the power."

Keem Bay will also be supported by Intel's OpenVINO Toolkit for development of computer vision applications, which "addresses a key pain point for developers -- allowing them to try, prototype and test AI solutions on a broad range of Intel processors before they buy hardware," according to Intel. It will also be incorporated into Intel's newly announced Dev Cloud for the Edge, launched today, designed to allow developers to test algorithms on any Intel hardware.

Intel also offered the first live demonstrations and additional architectural details of its Nervana Neural Network Processors for training (NNP-T1000) and inference (NNP-I1000), ASICs for cloud and data center environments first announced last August at the Hot Chips conference. In discussing the company's AI product roadmap, Naveen Rao, corporate VP/GM of Intel's AI Products Group, said the combination of "the new Intel hardware will enable the industry to embrace much larger and more complex AI algorithms, expanding what can be achieved with AI in the cloud and data center, an edge server, or an IoT device." "With this next phase of AI, we're reaching a breaking point in terms of computational hardware and memory," said Rao. "Purpose-built hardware like Intel Nervana NNPs and Movidius Myriad VPUs are necessary to continue the incredible progress in AI."


Windows* Machine Learning: AI Acceleration on Intel Hardware

#artificialintelligence

Artificial intelligence (AI) is spreading across server, desktop, and edge markets, and Intel offers AI solutions for all of them. From the CPU foundation of Intel Xeon, Intel Core, and Intel Atom processors, to dedicated acceleration from Intel Iris graphics, the low-wattage Intel Movidius vision processing unit (VPU), the new Intel Gaussian & Neural Accelerator (GNA), Mobileye* automotive technology, and custom Intel field-programmable gate array (FPGA) integration, the Intel AI product offerings span the gamut of applications. Windows* Machine Learning (ML) is an inference engine that runs on the edge on the Windows operating system (OS) and provides a very simple developer interface that is optimized under the hood for Intel hardware. Intel is working closely with Microsoft to ensure that the hardware optimizations behind Windows ML deliver state-of-the-art acceleration of model evaluation.
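
Windows ML itself is consumed from WinRT-projected languages such as C# or C++/WinRT, so the snippet below is only an analogous Python sketch of the same load-bind-evaluate flow, written against ONNX Runtime as a stand-in rather than the Windows ML API; the model path, input shape, and input name lookup are assumptions.

import numpy as np
import onnxruntime as ort

# Load a hypothetical ONNX classification model.
session = ort.InferenceSession("squeezenet.onnx")

# Bind an input tensor by name and evaluate the model.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype("float32")
outputs = session.run(None, {input_name: x})

print("top class:", int(np.argmax(outputs[0])))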


Optimizing Machine Learning with TensorFlow

#artificialintelligence

In our webinar "Optimizing Machine Learning with TensorFlow," we gave an overview of some of the impressive optimizations Intel has made to TensorFlow when running on its hardware. You can find a link to the archived video here. During the webinar, Mohammad Ashraf Bhuiyan, Senior Software Engineer in Intel's Artificial Intelligence Group, and I spoke about some of the common use cases that require optimization, as well as benchmarks demonstrating order-of-magnitude speed improvements when running on Intel hardware. TensorFlow, Google's library for machine learning (ML), has become the most popular machine learning library in a fast-growing ecosystem. It has over 77k stars on GitHub and is widely used in a growing number of business-critical applications.
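
Much of that tuning comes down to thread placement on Intel CPUs. The hedged sketch below shows the kind of knobs typically discussed; the thread counts and OpenMP settings are placeholder assumptions, and good values depend on core count and on whether latency or throughput matters more for your workload.

import os

# OpenMP settings consumed by the oneDNN/MKL-backed kernels; set before
# TensorFlow is imported.
os.environ.setdefault("OMP_NUM_THREADS", "8")
os.environ.setdefault("KMP_BLOCKTIME", "1")

import tensorflow as tf

# Intra-op threads parallelize a single op (e.g., one large matmul);
# inter-op threads run independent ops concurrently.
tf.config.threading.set_intra_op_parallelism_threads(8)
tf.config.threading.set_inter_op_parallelism_threads(2)

print("intra-op:", tf.config.threading.get_intra_op_parallelism_threads())
print("inter-op:", tf.config.threading.get_inter_op_parallelism_threads())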